
Search for: All records

Creators/Authors contains: "Kubsch, Marcus"


  1. Abstract Uncertainty is ubiquitous in science, but scientific knowledge is often represented to the public and in educational contexts as certain and immutable. This contrast can foster distrust when scientific knowledge develops in ways that people perceive as reversals, as we have observed during the ongoing COVID-19 pandemic. Drawing on research in statistics, child development, and several studies in science education, we argue that a Bayesian approach can support science learners in making sense of uncertainty. We provide a brief primer on Bayes’ theorem and then describe three ways to make Bayesian reasoning practical in K-12 science education contexts: a) using principles informed by Bayes’ theorem that relate to the nature of knowing and knowledge, b) interacting with a web-based application (or widget, the Confidence Updater) that makes the calculations needed to apply Bayes’ theorem more practical, and c) adopting strategies for supporting even young learners to engage in Bayesian reasoning. We conclude with directions for future research and sum up how viewing science and scientific knowledge from a Bayesian perspective can build trust in science.
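The confidence updating described in this abstract can be illustrated with a minimal sketch of Bayes’ theorem. This is a generic illustration, not the Confidence Updater widget itself; the function name, parameters, and example numbers are assumptions chosen for clarity.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Update confidence in a hypothesis H after observing evidence E.

    Bayes' theorem:
    P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|not H) * P(not H)]
    """
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical example: a learner starts undecided (prior = 0.5).
# The observed evidence is fairly likely if H is true (0.8) and
# less likely if H is false (0.3).
posterior = bayes_update(0.5, 0.8, 0.3)
print(round(posterior, 3))  # → 0.727, higher confidence after the evidence
```

On this view, new evidence shifts confidence gradually rather than flipping knowledge between "certain" and "wrong," which is the framing the authors argue can help learners interpret apparent reversals in scientific knowledge.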
  2. Abstract Machine learning (ML) has become commonplace in educational research and science education research, especially to support assessment efforts. Such applications of machine learning have shown their promise in replicating and scaling human-driven coding of students' work. Despite this promise, we and other scholars argue that machine learning has not yet achieved its transformational potential. We argue that this is because our field currently lacks frameworks for supporting creative, principled, and critical endeavors to use machine learning in science education research. To offer considerations for science education researchers' use of ML, we present a framework, Distributing Epistemic Functions and Tasks (DEFT), that highlights the functions and tasks involved in generating knowledge that can be carried out by either trained researchers or machine learning algorithms. Such considerations are critical decisions that should occur alongside those about, for instance, the type of data or algorithm used. We apply this framework to two cases, one that exemplifies the cutting-edge use of machine learning in science education research and another that offers a wholly different means of using machine learning and human-driven inquiry together. We conclude with strategies for researchers to adopt machine learning and call for the field to rethink how we prepare science education researchers in an era of great advances in computational power and access to machine learning methods.